1,971 research outputs found

    A Reproducible Study on Remote Heart Rate Measurement

    Get PDF
    This paper studies the problem of reproducible research in remote photoplethysmography (rPPG). Most of the work published in this domain is assessed on privately-owned databases, making it difficult to evaluate proposed algorithms in a standard and principled manner. As a consequence, we present a new, publicly available database containing a relatively large number of subjects recorded under two different lighting conditions. Also, three state-of-the-art rPPG algorithms from the literature were selected, implemented and released as open-source free software. After a thorough, unbiased experimental evaluation in various settings, it is shown that none of the selected algorithms is precise enough to be used in a real-world scenario.
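
    As an illustration only, the sketch below shows what a basic rPPG measurement involves; it is a minimal green-channel baseline, not one of the three algorithms evaluated in the paper, and it assumes a pre-cropped sequence of RGB face frames at a known frame rate.

        # Minimal rPPG baseline (illustrative, not one of the algorithms from the paper):
        # average the green channel over the face region, band-pass to plausible
        # heart rates and take the dominant spectral peak.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def estimate_heart_rate(face_frames, fps=30.0):
            """face_frames: iterable of HxWx3 RGB arrays of a cropped face region."""
            signal = np.array([frame[:, :, 1].mean() for frame in face_frames])
            signal -= signal.mean()
            # Band-pass between 0.7 and 4 Hz (roughly 42-240 beats per minute).
            b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
            filtered = filtfilt(b, a, signal)
            spectrum = np.abs(np.fft.rfft(filtered)) ** 2
            freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
            return 60.0 * freqs[np.argmax(spectrum)]  # estimated pulse rate in bpm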

    BEAT: An Open-Source Web-Based Open-Science Platform

    Get PDF
    With the increased interest in computational sciences, machine learning (ML), pattern recognition (PR) and big data, governmental agencies, academia and manufacturers are overwhelmed by the constant influx of new algorithms and techniques promising improved performance, generalization and robustness. Sadly, result reproducibility is often an overlooked feature accompanying original research publications, competitions and benchmark evaluations. The main reasons behind such a gap arise from natural complications in research and development in this area: the distribution of data may be a sensitive issue; software frameworks are difficult to install and maintain; test protocols may involve a potentially large set of intricate steps which are difficult to handle. Given the rising complexity of research challenges and the constant increase in data volume, the conditions for achieving reproducible research in the domain are also increasingly difficult to meet. To bridge this gap, we built an open platform for research in computational sciences related to pattern recognition and machine learning, to help with the development, reproducibility and certification of results obtained in the field. By making use of such a system, academic, governmental or industrial organizations enable users to easily and socially develop processing toolchains, re-use data, algorithms and workflows, and compare results from distinct algorithms and/or parameterizations with minimal effort. This article presents such a platform and discusses some of its key features, uses and limitations. We overview a currently operational prototype and provide design insights. Comment: References to papers published on the platform incorporated

    Deepfake detection: humans vs. machines

    Full text link
    Deepfake videos, where a person's face is automatically swapped with the face of someone else, are becoming easier to generate, with more realistic results. In response to the threat such manipulations can pose to our trust in video evidence, several large datasets of deepfake videos and many methods to detect them were proposed recently. However, it is still unclear how realistic deepfake videos are for an average person and whether the algorithms are significantly better than humans at detecting them. In this paper, we present a subjective study conducted in a crowdsourcing-like scenario, which systematically evaluates how hard it is for humans to tell whether a video is a deepfake or not. For the evaluation, we used 120 different videos (60 deepfakes and 60 originals) manually pre-selected from the Facebook deepfake database, which was provided in Kaggle's Deepfake Detection Challenge 2020. For each video, a simple question, "Is the face of the person in the video real or fake?", was answered on average by 19 naïve subjects. The results of the subjective evaluation were compared with the performance of two different state-of-the-art deepfake detection methods, based on Xception and EfficientNet (B4 variant) neural networks, which were pre-trained on two other large public databases: Google's subset of FaceForensics++ and the recent Celeb-DF dataset. The evaluation demonstrates that while human perception is very different from the perception of a machine, both are successfully fooled by deepfakes, albeit in different ways. Specifically, algorithms struggle to detect those deepfake videos which human subjects found very easy to spot.
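
    A hypothetical sketch of the machine side of such a comparison: a frame-level detector that returns P(fake) for a cropped face is applied to sampled frames and the per-frame probabilities are averaged into a video-level score. The detector and extract_face_frames helper are assumed stand-ins, not the models or code evaluated in the paper.

        # Hypothetical video-level scoring with a frame-level deepfake detector.
        # `detector` and `extract_face_frames` are assumed stand-ins, not the
        # Xception / EfficientNet-B4 models evaluated in the paper.
        import numpy as np

        def score_video(video_path, detector, extract_face_frames, every_nth=10):
            frames = extract_face_frames(video_path, every_nth)  # cropped face frames
            probs = [detector(frame) for frame in frames]        # P(fake) per frame
            return float(np.mean(probs))                         # video-level score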

    Are we overlooking Lepton Flavour Universal New Physics in b→sℓℓ?

    Get PDF
    The deviations with respect to the Standard Model (SM) that are currently observed in b→sℓℓ transitions (the so-called flavour anomalies) can be interpreted in terms of different New Physics (NP) scenarios within a model-independent effective approach. We reconsider the determination of NP in global fits from a different perspective by removing one implicit hypothesis of current analyses, namely that NP is only Lepton-Flavour Universality Violating (LFUV). We examine the roles played by LFUV NP and Lepton-Flavour Universal (LFU) NP altogether, providing new directions to identify the possible theory beyond the SM responsible for the anomalies observed. New patterns of NP emerge due to the possibility of allowing at the same time large LFUV and LFU NP contributions to C10μ, which provides a different mechanism to obey the constraint from the Bs→μ+μ− branching ratio. In this landscape of NP, we discuss how to discriminate among these scenarios in the short term thanks to current and forthcoming observables. While the update of RK will be a major milestone to confirm the NP origin of the flavour anomalies, additional observables, in particular the LFUV angular observable Q5, turn out to be central to assess the precise NP scenario responsible for the observed anomalies. Comment: 9 pages. v2 matches the published version
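
    Purely as an illustration of the mechanism mentioned above (the notation below is assumed, not taken from the paper): splitting the muonic Wilson coefficient into a lepton-flavour-universal shift and a muon-specific one shows why the Bs→μ+μ− branching ratio, which is sensitive only to their sum, can still be respected even when both pieces are individually large.

        % Illustrative decomposition (assumed notation)
        \begin{align}
          C_{10\mu} &= C_{10}^{\mathrm{SM}} + C_{10}^{U} + C_{10\mu}^{V},\\
          \mathcal{B}(B_s \to \mu^+\mu^-) &\propto \left| C_{10\mu} \right|^{2}
          \quad \text{(neglecting scalar operators)},
        \end{align}
        % so a large universal (U) shift and a large muon-specific (V) shift of
        % opposite sign can leave the measured branching ratio close to its SM value.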

    BEAT: Web-based open science platform for all

    Get PDF
    This presentation was part of Day 5 of the Open Science in Practice Summer School #osip2017. Additional information can be found on osip2017.epfl.ch

    Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm.

    Get PDF
    The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent in using different algorithms for dose warping.
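
    For readers unfamiliar with dose warping, the sketch below is a generic illustration only (it is not the NiftyReg-based pipeline used in the study): a planned dose grid is resampled through a deformation field that maps every voxel of the reference anatomy to a position in the grid on which the dose was computed.

        # Generic dose-warping illustration, not the NiftyReg pipeline of the study.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def warp_dose(dose, deformation, order=1):
            """dose: 3D dose array on the source image grid.
            deformation: array of shape (3, Z, Y, X) giving, for each voxel of the
            reference grid, the (z, y, x) voxel coordinates to sample in `dose`."""
            return map_coordinates(dose, deformation, order=order, mode="nearest")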

    Fairness in Biometrics: a figure of merit to assess biometric verification systems

    Full text link
    Machine learning-based (ML) systems have been widely deployed over the last decade in a myriad of scenarios, impacting several aspects of our daily lives. With such a vast range of applications, fairness has come into the spotlight due to the social impact these systems can have on minorities. In this work, aspects of fairness in biometrics are addressed. First, we introduce the first figure of merit that is able to evaluate and compare fairness aspects between multiple biometric verification systems, the so-called Fairness Discrepancy Rate (FDR). A use case with two synthetic biometric systems is introduced and demonstrates the potential of this figure of merit in extreme cases of fair and unfair behavior. Second, a use case using face biometrics is presented where several systems are evaluated and compared with this new figure of merit using three public datasets exploring gender and race demographics. Comment: 11 pages
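
    A minimal sketch of a discrepancy measure of this kind is given below; the exact weighting and normalisation of the FDR should be taken from the paper itself, so the alpha parameter and the max-gap formulation here are assumptions for illustration.

        # Sketch of a fairness-discrepancy measure across demographic groups
        # (assumed form, for illustration only).
        import numpy as np

        def fmr_fnmr(genuine, impostor, tau):
            """False match rate and false non-match rate at decision threshold tau."""
            fmr = np.mean(np.asarray(impostor) >= tau)
            fnmr = np.mean(np.asarray(genuine) < tau)
            return fmr, fnmr

        def fairness_discrepancy(scores_per_group, tau, alpha=0.5):
            """scores_per_group: {group: (genuine_scores, impostor_scores)}.
            Returns a value in [0, 1]; 1 means no gap between groups."""
            rates = [fmr_fnmr(g, i, tau) for g, i in scores_per_group.values()]
            fmrs, fnmrs = zip(*rates)
            a = max(fmrs) - min(fmrs)    # largest FMR gap between any two groups
            b = max(fnmrs) - min(fnmrs)  # largest FNMR gap between any two groups
            return 1.0 - (alpha * a + (1.0 - alpha) * b)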

    Image quality assessment for fake biometric detection: Application to Iris, fingerprint, and face recognition

    Full text link
    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits. This work has been partially supported by projects Contexts (S2009/TIC-1485) from CAM, Bio-Shield (TEC2012-34881) from Spanish MECD, TABULA RASA (FP7-ICT-257289) and BEAT (FP7-SEC-284989) from EU, and Cátedra UAM-Telefónica.
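
    As a rough illustration of the kind of features involved (only two of many; the smoothed-reference construction and the feature choice here are assumptions of this sketch, not a faithful reproduction of the 25 measures used in the paper):

        # Two example full-reference image-quality features computed between an
        # image and a smoothed version of itself; the actual method uses 25 such
        # features followed by a trained classifier.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def quality_features(gray_image):
            """gray_image: 2D float array with values in [0, 255]."""
            smoothed = gaussian_filter(gray_image, sigma=1.0)
            mse = np.mean((gray_image - smoothed) ** 2)
            psnr = 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf
            return np.array([mse, psnr])
        # A simple classifier (e.g. LDA) trained on such feature vectors from real
        # and fake samples would then yield the legitimate/impostor decision.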

    Biometric antispoofing methods: A survey in face recognition

    Full text link
    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. J. Galbally, S. Marcel and J. Fierrez, "Biometric Antispoofing Methods", IEEE Access, vol. 2, pp. 1530-1552, Dec. 2014. In recent decades, we have witnessed the evolution of biometric technology from the first pioneering works in face and voice recognition to the current state of development wherein a wide spectrum of highly accurate systems may be found, ranging from largely deployed modalities, such as fingerprint, face, or iris, to more marginal ones, such as signature or hand. This path of technological evolution has naturally led to a critical issue that has only started to be addressed recently: the resistance of this rapidly emerging technology to external attacks and, in particular, to spoofing. Spoofing, referred to by the term presentation attack in current standards, is a purely biometric vulnerability that is not shared with other IT security solutions. It refers to the ability to fool a biometric system into recognizing an illegitimate user as a genuine one by means of presenting a synthetic forged version of the original biometric trait to the sensor. The entire biometric community, including researchers, developers, standardizing bodies, and vendors, has thrown itself into the challenging task of proposing and developing efficient protection methods against this threat. The goal of this paper is to provide a comprehensive overview on the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality. The work covers theories, methodologies, state-of-the-art techniques, and evaluation databases and also aims at providing an outlook into the future of this very active field of research. This work was supported in part by the CAM under Project S2009/TIC-1485, in part by the Ministry of Economy and Competitiveness through the Bio-Shield Project under Grant TEC2012-34881, in part by the TABULA RASA Project under Grant FP7-ICT-257289, in part by the BEAT Project under Grant FP7-SEC-284989 through the European Union, and in part by the Cátedra Universidad Autónoma de Madrid-Telefónica.